7 research outputs found

    Experimental Evaluation of a UWB-Based Cooperative Positioning System for Pedestrians in GNSS-Denied Environment

    Get PDF
    Cooperative positioning (CP) utilises information sharing among multiple nodes to enable positioning in Global Navigation Satellite System (GNSS)-denied environments. This paper reports the performance of a CP system for pedestrians using Ultra-Wide Band (UWB) technology in GNSS-denied environments. The data set was collected as part of a benchmarking measurement campaign carried out at The Ohio State University in October 2017. Pedestrians were equipped with a variety of sensors, including two different UWB systems, on a specially designed helmet serving as a mobile multi-sensor platform for CP. Different users walked in stop-and-go mode along trajectories with predefined checkpoints and in various challenging environments. In the developed CP network, both Peer-to-Infrastructure (P2I) and Peer-to-Peer (P2P) measurements are used for positioning of the pedestrians. The proposed system can achieve decimetre-level accuracies (on average, around 20 cm) in the complete absence of GNSS signals, provided that measurements from infrastructure nodes are available and the network geometry is good. When these conditions are not met, the results show that the average accuracy degrades to metre level. Further, it is experimentally demonstrated that including P2P cooperative range observations further enhances the positioning accuracy and, in extreme cases when only one infrastructure measurement is available, P2P CP may reduce positioning errors by up to 95%. The complete test setup, the development methodology, and the data collection are discussed in this paper. In the next version of this system, additional observations such as Wi-Fi, camera, and other signals of opportunity will be included.
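    The abstract does not spell out an estimation algorithm, so the following is only a minimal sketch of how P2I (anchor) and P2P (peer) UWB ranges could be fused in an iterative least-squares position fix; the function and variable names are hypothetical, and the actual system may well use a different estimator (e.g. a Kalman filter).

```python
import numpy as np

def estimate_position(anchors, anchor_ranges, peers, peer_ranges, x0, iters=10):
    """Gauss-Newton least-squares 2D position fix from mixed UWB ranges.

    anchors      : (N, 2) known infrastructure node coordinates (P2I)
    anchor_ranges: (N,)   UWB ranges to those anchors
    peers        : (M, 2) current position estimates of cooperating peers (P2P)
    peer_ranges  : (M,)   UWB ranges to those peers
    x0           : (2,)   initial guess for the user position
    """
    nodes = np.vstack([anchors, peers])               # peers act as extra anchors
    ranges = np.concatenate([anchor_ranges, peer_ranges])
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - nodes                              # (N+M, 2) offsets to all nodes
        pred = np.linalg.norm(diff, axis=1)           # predicted ranges
        H = diff / pred[:, None]                      # Jacobian of range w.r.t. position
        dz = ranges - pred                            # range residuals
        dx, *_ = np.linalg.lstsq(H, dz, rcond=None)   # least-squares position update
        x += dx
        if np.linalg.norm(dx) < 1e-4:
            break
    return x
```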

    Object Tracking with LiDAR: Monitoring Taxiing and Landing Aircraft

    No full text
    Mobile light detection and ranging (LiDAR) sensors used in car navigation and robotics, such as Velodyne's VLP-16 and HDL-32E, allow for sensing the surroundings of the platform with high temporal resolution to detect obstacles, track objects and support path planning. This study investigates the feasibility of using LiDAR sensors for tracking taxiing or landing aircraft close to the ground to improve airport safety. A prototype system was developed and installed at an airfield to capture point clouds to monitor aircraft operations. One of the challenges of accurate object tracking using the Velodyne sensors is the relatively small vertical field of view (30°, 41.3°) and angular resolution (1.33°, 2°), resulting in a small number of points on the tracked object. The point density decreases with the object–sensor distance and is already sparse at a moderate range of 30–40 m. The paper introduces our model-based tracking algorithms, including volume minimization and cube trajectories, to address the optimal estimation of object motion and tracking based on sparse point clouds. Using a network of sensors, multiple tests were conducted at an airport to assess the performance of the demonstration system and the algorithms developed. The investigation was focused on monitoring small aircraft moving on runways and taxiways, and the results indicate that velocity and positioning accuracies of better than 0.7 m/s and 17 cm, respectively, were achieved. Overall, based on our findings, this technology is promising not only for aircraft monitoring but also for other airport applications.
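    As a rough illustration of why sparse returns make motion estimation hard, the snippet below estimates aircraft velocity from frame-to-frame centroids of the segmented object points. It is a simplified stand-in, not the volume-minimization or cube-trajectory algorithms the paper actually proposes, and the names are hypothetical.

```python
import numpy as np

def track_velocity(frames, dt):
    """Rough velocity estimate from sparse per-frame aircraft point clouds.

    frames: list of (Ni, 3) arrays of LiDAR returns attributed to the
            tracked aircraft in consecutive scans (Ni can be very small)
    dt    : time between scans in seconds (e.g. 0.1 s for a 10 Hz Velodyne)

    With only a handful of points per scan the centroid wanders over the
    airframe, which is exactly why a model-based fit is preferable.
    """
    centroids = np.array([f.mean(axis=0) for f in frames])
    return np.diff(centroids, axis=0) / dt   # (len(frames)-1, 3) velocity vectors
```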

    Reduction Method for Mobile Laser Scanning Data

    No full text
    Mobile Laser Scanning (MLS) technology acquires a huge volume of data in a very short time. In many cases, it is reasonable to reduce the size of the dataset by eliminating points in such a way that the datasets, after reduction, meet specific optimization criteria. Various methods exist to decrease the size of a point cloud, such as raw data reduction, Digital Terrain Model (DTM) generalization or generation of a regular grid. These methods have been successfully applied to data captured by Airborne Laser Scanning (ALS) and Terrestrial Laser Scanning (TLS); however, they have not been fully analyzed on data captured by an MLS system. The paper presents our new approach, called the Optimum Single MLS Dataset method (OptD-single-MLS), which is an algorithm for MLS data reduction. The tests were carried out in two variants: (1) for raw sensory measurements and (2) for a georeferenced 3D point cloud. We found that the OptD-single-MLS method provides a good solution in both variants; therefore, the choice of the reduction variant depends only on the user.
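    For context, a generic regular-grid (voxel) reduction of the kind the abstract contrasts against can be sketched as follows; this is not the OptD-single-MLS algorithm itself, and the names are hypothetical.

```python
import numpy as np

def voxel_grid_reduce(points, cell_size):
    """Regular-grid reduction: keep one representative point per occupied voxel.

    points   : (N, 3) array of MLS points (x, y, z)
    cell_size: voxel edge length in the same units as the points
    """
    keys = np.floor(points / cell_size).astype(np.int64)      # voxel index per point
    _, idx = np.unique(keys, axis=0, return_index=True)       # first point in each voxel
    return points[np.sort(idx)]                                # reduced cloud, original order
```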

    A Benchmarking Measurement Campaign to Support Ubiquitous Localization in GNSS Denied and Indoor Environments

    No full text
    Localization in GNSS-denied/challenged indoor/outdoor and transitional environments represents a challenging research problem. As part of the joint IAG/FIG Working Groups 4.1.1 and 5.5 on Multi-sensor Systems, a benchmarking measurement campaign was conducted at The Ohio State University. Initial experiments have demonstrated that Cooperative Localization (CL) is extremely useful for positioning and navigation of platforms navigating in swarms or networks. In the data acquisition campaign, multiple sensor platforms, including vehicles, bicyclists and pedestrians, were equipped with combinations of GNSS, Ultra-wide Band (UWB), Wireless Fidelity (Wi-Fi), Raspberry Pi units, cameras, Light Detection and Ranging (LiDAR) and inertial sensors for CL. Pedestrians wore a specially designed helmet equipped with some of these sensors. An overview of the experimental configurations, test scenarios, characteristics and sensor specifications is given. It has been demonstrated that all involved sensor platforms in the different test scenarios gained a significant increase in positioning accuracy by using ubiquitous user localization. For example, in the indoor environment, success rates of approximately 97% were obtained using Wi-Fi fingerprinting for correctly detecting the room-level location of the user. Using UWB, decimeter-level positioning accuracy is demonstrably achievable under certain conditions. The full sets of data are being made available to the wider research community through the WG on request.
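    A minimal sketch of room-level Wi-Fi fingerprinting by k-nearest-neighbour matching in RSSI space is given below; the abstract does not state which classifier was used in the campaign, so the names and the Euclidean metric are assumptions.

```python
import numpy as np

def predict_room(train_rssi, train_rooms, query_rssi, k=3):
    """k-nearest-neighbour Wi-Fi fingerprinting for room-level localization.

    train_rssi : (N, A) RSSI fingerprints (dBm) over A access points,
                 with a sentinel value (e.g. -100) where an AP was not heard
    train_rooms: (N,) array of room labels for each fingerprint
    query_rssi : (A,) RSSI vector observed by the user
    """
    d = np.linalg.norm(train_rssi - query_rssi, axis=1)    # distance in signal space
    nearest = np.argsort(d)[:k]                            # indices of k closest fingerprints
    labels, counts = np.unique(train_rooms[nearest], return_counts=True)
    return labels[np.argmax(counts)]                       # majority vote among neighbours
```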

    A benchmarking measurement campaign in GNSS-denied/challenged indoor/outdoor and transitional environments

    No full text
    Localization in GNSS-denied/challenged indoor/outdoor and transitional environments represents a challenging research problem. This paper reports on a sequence of extensive experiments conducted at The Ohio State University (OSU) as part of the joint effort of the FIG/IAG WG on Multi-sensor Systems. Their overall aim is to assess the feasibility of achieving GNSS-like performance for ubiquitous positioning, in terms of autonomous, global and preferably infrastructure-free positioning of portable platforms at affordable cost. In the data acquisition campaign, multiple sensor platforms, including vehicles, bicyclists and pedestrians, were used, with cooperative positioning (CP) as the major focus for achieving a joint navigation solution. The GPSVan of The Ohio State University was used as the main reference vehicle, and a specially designed helmet was developed for the pedestrians. The employed/tested positioning techniques are based on using sensor data from GNSS, Ultra-wide Band (UWB), Wireless Fidelity (Wi-Fi), vision-based positioning with cameras and Light Detection and Ranging (LiDAR), as well as inertial sensors. The experimental and initial results include the preliminary data processing, UWB sensor calibration, Wi-Fi indoor positioning with room-level granularity and platform trajectory determination. The results demonstrate that CP techniques are extremely useful for positioning of platforms navigating in swarms or networks. A significant performance improvement in terms of positioning accuracy and reliability is achieved. Using UWB, decimeter-level positioning accuracy is achievable under typical conditions, such as normal walls, average-complexity buildings, etc. Using Wi-Fi fingerprinting, success rates of approximately 97% were obtained for correctly detecting the room-level location of the user.
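    The abstract mentions UWB sensor calibration only in general terms; a common and simple assumption is a linear range model (scale factor plus additive bias) fitted against reference distances, sketched below with hypothetical names. This is not necessarily the calibration procedure used by the authors.

```python
import numpy as np

def calibrate_uwb(measured, true_dist):
    """Fit a scale factor a and bias b so that true_dist ≈ a * measured + b.

    measured : (N,) raw UWB ranges from a calibration baseline test
    true_dist: (N,) reference distances (e.g. from a total station)

    Returns (a, b) to be applied to subsequent raw ranges.
    """
    A = np.column_stack([measured, np.ones_like(measured)])   # design matrix [range, 1]
    (a, b), *_ = np.linalg.lstsq(A, true_dist, rcond=None)    # least-squares fit
    return a, b

# usage: a, b = calibrate_uwb(raw, ref); corrected = a * new_raw + b
```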